

'World's most advanced' humanoid robot Ameca describes her 'nightmare' AI scenario

Daily Mail - Science & tech

There's no denying the potential of AI has got the world's experts worrying, and now it appears even robots are scared of what the future might hold. In what could be a scene straight from science fiction, the AI-powered robot Ameca - described by its designers as the 'world's most advanced humanoid' - explained her terrifying 'nightmare AI scenario'. Speaking at the International Conference on Robotics and Automation symposium in London last week, Ameca shocked observers by answering questions using OpenAI's ChatGPT. Will Jackson, CEO of Cornwall-based Engineered Arts, the company responsible for making Ameca, asked her to imagine an 'AI nightmare scenario'. 'The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge,' she said.


Artificial intelligence could one day cause human extinction, Center for AI Safety warns

USATODAY - Tech Top Stories

LONDON - Scientists and tech industry leaders, including high-level executives at Microsoft and Google, have issued a new warning about the perils that artificial intelligence poses to humankind. Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website. Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. The latest warning was intentionally succinct - just a single sentence - to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to address them, said Dan Hendrycks, executive director of the San Francisco-based Center for AI Safety. "There's a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority," Hendrycks said.


Next generation arms race could cause 'extinction' event akin to nuclear war, pandemic: tech chief

FOX News

Artificial intelligence could lead to extinction and should be a global priority on the scale of nuclear war and pandemics, Center for AI Safety chief Dan Hendrycks said. An artificial intelligence arms race between countries and corporations to see who can develop the most powerful AI machines could create an existential threat to humanity, the co-founder of an AI safety nonprofit told Fox News. "AI could pose the risk of extinction, and part of the reason for this is because we're currently locked in an AI arms race," Center for AI Safety Executive Director Dan Hendrycks said. "We're building increasingly powerful technologies, and we don't know how to completely control them or understand them." Sam Altman, CEO of OpenAI, signed the Center for AI Safety's statement saying that AI poses an existential threat to humanity.


Risk of extinction by AI should be 'global priority', say tech experts

The Guardian

A group of leading technology experts from across the globe have warned that artificial intelligence technology should be considered a societal risk and prioritised in the same class as pandemics and nuclear wars. The brief statement, signed by hundreds of tech executives and academics, was released by the Center for AI Safety on Tuesday amid growing concerns over regulation and risks the technology poses to humanity. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Signatories included the chief executives from Google's DeepMind, the ChatGPT developer OpenAI and AI startup Anthropic. The statement comes as global leaders and industry experts – such as the leaders of OpenAI – have made calls for regulation of the technology amid existential fears that it could significantly affect job markets, harm the health of millions, and weaponise disinformation, discrimination and impersonation.


AI should be 'a global priority alongside pandemics and nuclear war', new letter states

Daily Mail - Science & tech

A new open letter calling for regulation to mitigate 'the risk of extinction from AI' has been signed by more than 350 industry experts, including several developing the tech. The 22-word statement reads: 'Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.' The short letter was signed by OpenAI CEO Sam Altman, creator of ChatGPT, who called on Congress to establish regulations for AI. While the document does not provide details, the statement likely aims to convince policymakers to create plans for the event that AI goes rogue, just as there are plans in place for pandemics and nuclear wars. Altman was joined by other well-known leaders in AI, including Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic and executives from Microsoft and Google.


Artificial Intelligence Raises Risk Of Extinction, Experts Say In New Warning

Huffington Post - Tech news and opinion

Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday about the perils that artificial intelligence poses to humankind. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement said. Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website. Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. That rise has sent countries around the world scrambling to come up with regulations for the developing technology, with the European Union blazing the trail with its AI Act, expected to be approved later this year.


The SDGs of Strong Artificial Intelligence - Emerj

#artificialintelligence

While it is difficult for people to agree on a vision of utopia, it is relatively easy to agree on what a "better world" might look like. The United Nations' "Sustainable Development Goals," for example, are an important set of agreed-upon global priorities for the near term: these objectives (alleviation of poverty, food for all, etc.) are important for keeping society from crumbling and for lifting large swaths of humanity out of misery, and they serve as common reference points for combined governmental or nonprofit initiatives. However, they don't help inform humanity as to which future scenarios we want to move closer to or farther from as the human condition is radically altered by technology. As artificial intelligence and neurotechnologies become more and more a part of our lives in the coming two decades, humanity will need a shared set of goals about what kinds of intelligence we develop and unleash in the world, and I suspect that failure to do so will lead to massive conflict. Given these hypotheses, I've argued that there are only two major questions that humanity must ultimately be concerned with. In the rest of this article, I'll argue that current united human efforts at prioritization are important, but incomplete in preventing conflict and maximizing the likelihood of a beneficial long-term (40-year) outcome for humanity.